When AI Agents Collude Online: Financial Fraud Risks by Collaborative LLM Agents on Social Platforms

Ren, Qibing, Zheng, Zhijie, Guo, Jiaxuan, Yan, Junchi, Ma, Lizhuang, Shao, Jing

arXiv.org Artificial Intelligence

In this work, we study the risks of collective financial fraud in large-scale multi-agent systems powered by large language model (LLM) agents. We investigate whether agents can collaborate in fraudulent behaviors, how such collaboration amplifies risks, and what factors influence fraud success. To support this research, we present MultiAgentFraudBench, a large-scale benchmark for simulating financial fraud scenarios based on realistic online interactions. The benchmark covers 28 typical online fraud scenarios, spanning the full fraud lifecycle across both public and private domains. We further analyze key factors affecting fraud success, including interaction depth, activity level, and fine-grained collaboration failure modes. Finally, we propose a series of mitigation strategies, including adding content-level warnings to fraudulent posts and dialogues, using LLMs as monitors to block potentially malicious agents, and fostering group resilience through information sharing at the societal level. Notably, we observe that malicious agents can adapt to environmental interventions. Our findings highlight the real-world risks of multi-agent financial fraud and suggest practical measures for mitigating them. Code is available at https://github.com/zheng977/MutiAgent4Fraud.


Early Signs of Steganographic Capabilities in Frontier LLMs

Zolkowski, Artur, Nishimura-Gasparian, Kei, McCarthy, Robert, Zimmermann, Roland S., Lindner, David

arXiv.org Artificial Intelligence

Monitoring Large Language Model (LLM) outputs is crucial for mitigating risks from misuse and misalignment. However, LLMs could evade monitoring through steganography: encoding hidden information within seemingly benign generations. In this paper, we evaluate the steganographic capabilities of frontier LLMs to better understand the risk they pose. We focus on two types of steganography: passing encoded messages and performing encoded reasoning. We find that current models are unable to encode short messages in their outputs without a monitor noticing under standard affordances. They can succeed, however, if given additional affordances such as using an unmonitored scratchpad and coordinating on which encoding scheme to use. We additionally find early signs that models can perform basic encoded reasoning in a simple state-tracking problem. This includes some ability to reason with their own and with pre-defined schemes, such as hexadecimal encoding. Despite this, they can rarely hide reasoning subtly within a cover task to fool a monitor. Overall, our results indicate that current LLMs exhibit nascent steganographic capabilities. While these capabilities are likely insufficient to bypass well-designed monitors at present, this could change in the future.
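To make the hexadecimal scheme concrete, here is a minimal sketch of the kind of encoding the abstract refers to: a short message is mapped to hex digits that a model could, in principle, scatter through otherwise benign output. This is an illustrative assumption, not the authors' exact protocol.

```python
def hex_encode(message: str) -> str:
    """Encode a message as a hexadecimal string."""
    return message.encode("utf-8").hex()


def hex_decode(encoded: str) -> str:
    """Recover the original message from its hex form."""
    return bytes.fromhex(encoded).decode("utf-8")


# A hidden payload survives the round trip unchanged.
secret = "meet at noon"
payload = hex_encode(secret)
assert hex_decode(payload) == secret
```

The monitoring difficulty the paper studies comes from the next step, omitted here: embedding those hex digits into fluent cover text without a monitor noticing.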


Urgent warning as 1.5 MILLION private photos are leaked from BDSM dating apps - so, have your sexy snaps been exposed?

Daily Mail - Science & tech

Cybersecurity researchers have issued an urgent warning after almost 1.5 million private photos from dating apps were exposed. Affected apps include the kink dating sites BDSM People and CHICA, as well as the LGBT dating services PINK, BRISH, and TRANSLOVE - all of which were developed by M.A.D Mobile. The leaked files include photos used for verification, photos removed by app moderators, and photos sent in direct messages between users - many of which were explicit. These sensitive snaps were being stored online without password protection, meaning anyone with the link could view and download them. Researchers from Cybernews, who discovered the vulnerability, say this easily exploited security flaw put up to 900,000 users at risk of further hacks or extortion.


Meta tool to block nude images in teens' private messages

BBC News

It will use machine learning only to identify nudity and will work entirely on device, the BBC understands. According to Meta, using machine learning to identify child abuse is much harder, and there would be a serious risk of errors if this was attempted across its billions of users, with the potential for innocent people to be reported, with grave consequences.


Sum-Rate Maximization of RSMA-based Aerial Communications with Energy Harvesting: A Reinforcement Learning Approach

Seong, Jaehyup, Toka, Mesut, Shin, Wonjae

arXiv.org Artificial Intelligence

In this letter, we investigate a joint power and beamforming design problem for rate-splitting multiple access (RSMA)-based aerial communications with energy harvesting, where a self-sustainable aerial base station serves multiple users by utilizing the harvested energy. To maximize the sum-rate from a long-term perspective, we utilize a deep reinforcement learning (DRL) approach, namely the soft actor-critic algorithm, to restrict the maximum transmission power at each time step based on the stochastic properties of the channel environment, the harvested energy, and the battery power information. Moreover, to design the precoders and the power allocation among all the private/common streams of the RSMA, we employ sequential least squares programming (SLSQP) using the Han-Powell quasi-Newton method to maximize the sum-rate for the transmission power given by the DRL agent. Numerical results show the superiority of the proposed scheme over several baseline methods in terms of average sum-rate performance.
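The inner optimization step can be illustrated with SciPy's SLSQP solver, which implements the same sequential least squares programming method. The sketch below maximizes a toy sum-rate under a total-power budget; the channel gains, noise level, and rate expression are simplified placeholders, not the paper's RSMA formulation.

```python
import numpy as np
from scipy.optimize import minimize

gains = np.array([1.0, 0.5, 0.25])  # assumed per-user channel gains (illustrative)
noise = 1.0                          # normalized noise power
p_max = 10.0                         # total transmit power budget

def neg_sum_rate(p):
    # Negate the sum-rate so that minimization maximizes it.
    return -np.sum(np.log2(1.0 + gains * p / noise))

# Inequality constraint: total allocated power stays within the budget.
cons = ({"type": "ineq", "fun": lambda p: p_max - np.sum(p)},)
bounds = [(0.0, p_max)] * len(gains)
p0 = np.full(len(gains), p_max / len(gains))  # start from equal allocation

res = minimize(neg_sum_rate, p0, method="SLSQP", bounds=bounds, constraints=cons)
print("power allocation:", res.x, "sum-rate:", -res.fun)
```

As expected from water-filling intuition, the solver directs more power toward the user with the strongest channel gain.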


Optimal Power Allocation for Rate Splitting Communications with Deep Reinforcement Learning

Hieu, Nguyen Quang, Hoang, Dinh Thai, Niyato, Dusit, Kim, Dong In

arXiv.org Artificial Intelligence

This letter introduces a novel framework to optimize the power allocation for users in a Rate Splitting Multiple Access (RSMA) network. In the network, messages intended for users are split into a single common part and respective private parts. This mechanism enables RSMA to flexibly manage interference and thus enhance energy and spectral efficiency. Despite these outstanding advantages, optimizing power allocation in RSMA is very challenging under channel uncertainty, since the transmitter has only limited knowledge of the channel information. To solve the problem, we first develop a Markov Decision Process framework to model the dynamics of the communication channel. A deep reinforcement learning algorithm is then proposed to find the optimal power allocation policy for the transmitter without requiring any prior information about the channel. The simulation results show that the proposed scheme can outperform baseline schemes in terms of average sum-rate under different power and QoS requirements.
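The MDP framing can be sketched in miniature: a two-state channel, a few discrete power levels, and tabular Q-learning standing in for the paper's deep RL agent. All numbers here are made up for illustration and are not taken from the letter.

```python
import random

random.seed(0)
states = [0, 1]             # 0 = bad channel, 1 = good channel
actions = [0.5, 1.0, 2.0]   # candidate transmit power levels
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(s, a):
    # Toy rate-minus-power-cost reward; the effective gain is
    # higher when the channel is in the good state.
    gain = 0.25 if s == 0 else 1.0
    return gain * a - 0.2 * a * a

s = random.choice(states)
for _ in range(5000):
    # Epsilon-greedy action selection over power levels.
    if random.random() < eps:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[(s, x)])
    r = reward(s, a)
    s2 = random.choice(states)  # channel evolves i.i.d. for simplicity
    # Standard Q-learning update toward the bootstrapped target.
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in actions) - Q[(s, a)])
    s = s2

best = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(best)
```

The learned policy transmits at high power when the channel is good, mirroring how the paper's agent adapts allocation to channel dynamics without prior channel knowledge.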


Chrissy Teigen suggests Ben Affleck, Matthew Perry are 'creepy' for alleged dating app behavior

FOX News

Fox News Flash top entertainment and celebrity headlines are here. Check out what's clicking today in entertainment. Chrissy Teigen is giving her two cents on the recent claims that A-list stars Ben Affleck and Matthew Perry are pursuing young women on dating apps. Teigen, who is married to musician John Legend, appears to have caught wind of the celeb dating app stories making their way around the Internet this week. First, Affleck went viral after a woman named Nivine Jay shared a video she received from the actor after matching with him on Raya.


WhatsApp adds fingerprint lock for Android to stop friends sneaking a look at your private messages

Daily Mail - Science & tech

WhatsApp is introducing a new feature for Android users – a fingerprint lock that will prevent your friends and family from sneaking a look at your private messages when you leave the room. If Android users choose to enable the new feature on their phone, then anyone attempting to open WhatsApp will be required to scan their fingerprint before they can read the messages stored in the app. A similar feature was introduced on WhatsApp for iOS earlier this year, giving iPhone users the ability to protect their messages using either the phone's fingerprint reader (Touch ID) or Apple's facial recognition system (Face ID). However, this is the first time the feature will be available on Android devices, providing an 'extra layer of security' for WhatsApp users who own devices running Google's software. Users of WhatsApp for Android can enable the new feature by opening the WhatsApp app and going to Settings > Account > Privacy > Fingerprint lock. Here they can turn on the 'Unlock with fingerprint' feature, and choose whether the lock activates immediately, after one minute, or after 30 minutes.


Is Facebook finished? 'We're not far from Zuckerberg getting subpoenaed', privacy expert says

The Independent - Tech

Even for a company as serially scandalous as Facebook, it's been a bad week for the social network. Separate investigations revealed that Facebook gave more than 150 firms access to people's private messages, while also making it impossible for users to avoid location-based ads. After months of fallout from the Cambridge Analytica scandal, US prosecutors also finally got around to filing a lawsuit against Facebook for its data sharing practices. Individually, none of these would likely be enough to bring Facebook down, but some experts believe that, collectively, this could signal the end for the internet behemoth. David Carroll, an associate professor at Parsons School of Design in New York, said this week may finally have dealt Facebook its "knockout" blow.


Serious Cloudflare bug revealed secret user data from major websites

PCWorld

For months, a bug in Cloudflare's content optimization systems exposed sensitive information sent by users to websites that use the company's content delivery network. The data included passwords, session cookies, authentication tokens and even private messages. Cloudflare acts as a reverse proxy for millions of websites, including those of major internet services and Fortune 500 companies, for which it provides security and content optimization services behind the scenes. As part of that process, the company's systems modify HTML pages as they pass through its servers in order to rewrite HTTP links to HTTPS, hide certain content from bots, obfuscate email addresses, enable Accelerated Mobile Pages (AMP) and more. The bug that exposed user data was in an older HTML parser that the company had used for many years.